In the scientific research community, memory in the brain is commonly believed to be stored at the synapse - a hypothesis famously attributed to the psychologist Donald Hebb. However, a minority view holds that memory is stored inside the neuron at the molecular (RNA or DNA) level - an alternative postulation known as the cell-intrinsic hypothesis, a term coined by the psychologist Randy Gallistel. In this paper, we review key experimental evidence from both sides of the argument. We begin with Eric Kandel's studies of the sea slug Aplysia, which provided the first evidence in support of the synaptic hypothesis. Next, we touch on experiments in mice by John O'Keefe (declarative memory and the hippocampus) and Joseph LeDoux (procedural fear memory and the amygdala). We then introduce the synapse as the fundamental building block of today's artificial neural networks. After that, we describe David Glanzman's work dissociating memory storage from synaptic change in Aplysia, and Susumu Tonegawa's experiment using lasers to reactivate memories lost to retrograde amnesia in mice. From there, we highlight Germund Hesslow's experiment on conditioned pauses in ferrets and Beatrice Gelber's experiment on conditioning in a single-celled organism that has no synapses (Paramecium aurelia). This is followed by a description of David Glanzman's experiment on transplanting memory between Aplysia using RNA. Finally, we outline Brian Dias and Kerry Ressler's experiment on the transfer of fear memory from parent mice to their offspring via DNA. We conclude with some potential implications for the wider field of psychology.
The world currently offers an abundance of data in multiple domains from which we can learn reinforcement learning (RL) policies without further interaction with the environment. RL agents can learn offline from such data, but deploying them while they are still learning may be dangerous in domains where safety is critical. It is therefore essential to estimate how a newly learned agent will perform if deployed in the target environment before actually deploying it, and without the risk of overestimating its true performance. To achieve this, we introduce a framework for safe evaluation of offline learning that uses approximate high-confidence off-policy evaluation (HCOPE) to estimate the performance of offline policies during learning. In our setting, we assume a source of data, which we split into a train set, used to learn an offline policy, and a test set, used to estimate a lower bound on the offline policy's performance via off-policy evaluation with bootstrapping. The lower-bound estimate tells us how well a newly learned target policy would perform before it is deployed in the real environment, and therefore allows us to decide when to deploy it.
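To make the evaluation step concrete, here is a minimal sketch (not the authors' implementation) of how a bootstrapped lower bound on a target policy's value might be computed from a held-out test set using per-trajectory importance sampling; the trajectory format, function names, and confidence level are assumptions for illustration, and the paper's exact HCOPE estimator may differ.

```python
import numpy as np

def importance_sampled_returns(test_trajectories, target_policy, gamma=0.99):
    """Per-trajectory (ordinary) importance-sampled returns.
    Each trajectory is a list of (state, action, reward, behavior_prob) tuples."""
    returns = []
    for traj in test_trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r, b_prob) in enumerate(traj):
            weight *= target_policy(a, s) / b_prob  # pi(a|s) / b(a|s)
            ret += (gamma ** t) * r
        returns.append(weight * ret)
    return np.array(returns)

def bootstrap_lower_bound(is_returns, confidence=0.95, n_boot=2000, rng=None):
    """Percentile-bootstrap lower bound on the mean importance-sampled return."""
    rng = rng or np.random.default_rng(0)
    boot_means = [
        rng.choice(is_returns, size=len(is_returns), replace=True).mean()
        for _ in range(n_boot)
    ]
    return np.percentile(boot_means, 100 * (1 - confidence))

# Deploy the newly learned policy only if the lower bound beats a safety baseline:
# if bootstrap_lower_bound(importance_sampled_returns(test_set, pi_new)) > baseline_value:
#     deploy(pi_new)
```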
Arrays of quantum dots (QDs) are a promising candidate system for realizing scalable, coupled qubit systems and serving as a fundamental building block for quantum computers. In such semiconductor quantum systems, devices now have tens of gate voltages that must be carefully set to bring the system into the single-electron regime and to achieve good qubit operational performance. Mapping the required dot locations and charges onto gate voltages presents a challenging classical control problem. With an increasing number of QD qubits, the relevant parameter space grows sufficiently large that heuristic control becomes infeasible. In recent years, there has been considerable effort to automate device control by combining script-based algorithms with machine learning (ML) techniques. Here, we present a comprehensive overview of recent progress in the automation of QD device control, with particular emphasis on silicon- and GaAs-based QDs formed in two-dimensional electron gases. Combining physics-based models with modern numerical optimization and ML has proven highly effective in yielding efficient, scalable control. Further integration of theoretical, computational, and experimental efforts with computer science and ML holds great potential for advancing semiconductor and other quantum computing platforms.
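As a toy illustration of why mapping dot configurations to gate voltages is a classical control and optimization problem, the sketch below tunes two gate voltages of a made-up charge-sensor model toward a target operating point; the sensor model, target values, and cost function are invented purely for illustration and do not represent any real device or the methods surveyed here.

```python
import numpy as np
from scipy.optimize import minimize

def simulated_sensor_response(voltages):
    """Toy stand-in for a device measurement: a smooth function of two gate voltages.
    A real tuner would query the actual (noisy) charge sensor here."""
    v1, v2 = voltages
    return np.array([np.tanh(3 * (v1 - 0.4)), np.tanh(3 * (v2 - 0.7))])

TARGET = np.array([0.0, 0.0])  # desired sensor readings at the operating point (toy values)

def tuning_cost(voltages):
    """Distance between the measured response and the target operating point."""
    return np.sum((simulated_sensor_response(voltages) - TARGET) ** 2)

# Script-based tuning step: numerically search gate-voltage space for the target regime.
result = minimize(tuning_cost, x0=[0.0, 0.0], method="Nelder-Mead")
print("tuned gate voltages:", result.x)
```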
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available in the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
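To illustrate the transformation/filter distinction, here is a minimal sketch of what such components could look like; the class and method names are hypothetical and do not reproduce NL-Augmenter's actual interface.

```python
import random
from typing import List

class KeyboardTypoTransformation:
    """Hypothetical transformation: perturbs text by swapping in nearby-keyboard typos."""
    NEIGHBORS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "t": "ry"}

    def generate(self, sentence: str, prob: float = 0.05, seed: int = 0) -> List[str]:
        rng = random.Random(seed)
        chars = [
            rng.choice(self.NEIGHBORS[c]) if c in self.NEIGHBORS and rng.random() < prob else c
            for c in sentence
        ]
        return ["".join(chars)]  # a transformation returns modified copies of the input

class LengthFilter:
    """Hypothetical filter: keeps only examples whose token count falls in a range."""
    def __init__(self, min_tokens: int = 5, max_tokens: int = 50):
        self.min_tokens, self.max_tokens = min_tokens, max_tokens

    def filter(self, sentence: str) -> bool:
        return self.min_tokens <= len(sentence.split()) <= self.max_tokens

# Usage: perturb inputs for a robustness probe, and slice the data by a feature of interest.
print(KeyboardTypoTransformation().generate("The quick brown fox jumps over the lazy dog"))
print(LengthFilter().filter("Too short"))
```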
Current approaches to autotuning quantum dot (QD) devices, while showing some success, lack an assessment of data reliability. This leads to unexpected failures when an autonomous system processes noisy or otherwise low-quality data. In this work, we propose a framework for robust autotuning of QD devices that combines a machine learning (ML) state classifier with a data quality control module. The data quality control module acts as a 'gatekeeper', ensuring that only reliable data are processed by the state classifier; lower-quality data instead trigger device recalibration or termination. To train both ML systems, we enhance the QD simulations by incorporating synthetic noise typical of QD experiments. We confirm that including synthetic noise in the training of the state classifier significantly improves its performance, yielding an accuracy of 95.0(9)% when tested on experimental data. We then validate the functionality of the data quality control module by showing that the state classifier's performance deteriorates with decreasing data quality, as expected. Our results establish a robust and flexible ML framework for autotuning of noisy QD devices.
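The 'gatekeeper' pattern described above can be sketched as a simple two-stage pipeline; the quality score, state labels, thresholds, and actions below are placeholders and do not reproduce the paper's trained models or tuning logic.

```python
import numpy as np

def assess_quality(measurement: np.ndarray) -> float:
    """Placeholder data-quality score in [0, 1]; stands in for the trained quality-control model."""
    snr_proxy = measurement.std() / (np.abs(np.diff(measurement)).mean() + 1e-9)
    return float(np.clip(snr_proxy / 10.0, 0.0, 1.0))

def classify_state(measurement: np.ndarray) -> str:
    """Placeholder for the trained state classifier (e.g., single-dot vs. double-dot regime)."""
    return "double-dot" if measurement.mean() > 0.5 else "single-dot"

def recalibrate_and_remeasure() -> np.ndarray:
    """Stand-in for a device recalibration followed by a new measurement."""
    return np.random.default_rng().random(64)

def autotune_step(measurement: np.ndarray, quality_threshold: float = 0.7, max_retries: int = 3) -> str:
    """Gatekeeper pipeline: only measurements passing the quality check reach the classifier."""
    for _ in range(max_retries):
        if assess_quality(measurement) >= quality_threshold:
            return classify_state(measurement)
        # Low quality: recalibrate and remeasure instead of trusting the classifier.
        measurement = recalibrate_and_remeasure()
    return "terminate"  # give up after repeated low-quality data

print(autotune_step(np.random.default_rng(0).random(64)))
```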
Robot-assisted surgery is now well established in clinical practice and has become the gold-standard treatment option for several clinical indications. The field is expected to grow substantially over the next decade, with a range of new robotic devices emerging to address unmet clinical needs across different specialties. A vibrant surgical robotics research community is key to conceptualizing such new systems, as well as to developing and training the engineers and scientists who will translate them into practice. The da Vinci Research Kit (dVRK), an academic and industry collaborative effort to repurpose the da Vinci Surgical System (Intuitive Surgical Inc, USA) as a research platform for surgical robotics, is a key initiative for lowering the barrier to entry faced by new research groups in surgical robotics. In this paper, we present an extensive review of the publications facilitated by the dVRK over the past decade. We classify the research efforts into different categories and outline some of the major challenges and needs of the robotics community in sustaining this initiative and building on it.
The problem of classifying high-dimensional shapes in real-world data grows in complexity as the dimension of the space increases. For the case of identifying convex shapes of different geometries, a new classification framework has recently been proposed in which the intersections of a set of one-dimensional representations, called rays, with the boundary of the shape are used to identify the specific geometry. This ray-based classification (RBC) has been empirically verified using synthetic datasets of two- and three-dimensional shapes (Zwolak et al., in Proceedings of the Third Workshop on Machine Learning and the Physical Sciences, NeurIPS 2020, Vancouver, Canada, December 11, 2020) and has recently also been validated experimentally (Zwolak et al., PRX Quantum 2:020335, 2021). Here, we establish a bound on the number of rays required for shape classification, defined in terms of key angular metrics, for arbitrary convex shapes. For two dimensions, we derive a lower bound on the number of rays in terms of the shape's length, diameter, and exterior angles. For convex polytopes in $\mathbb{R}^n$, we generalize this result to a similar bound given as a function of the dihedral angles and the geometric parameters of the polygonal faces. This result enables a different approach to estimating high-dimensional shapes that uses substantially fewer data elements than volumetric or surface-based methods.
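As a concrete two-dimensional illustration of the ray-based idea, the sketch below casts evenly spaced rays from an interior point of a convex polygon and records the distances to the boundary as a fingerprint that distinguishes, for example, a square from a triangle; the number of rays and the distance-based fingerprint are illustrative choices, not the bound derived in the paper.

```python
import numpy as np

def ray_boundary_distances(vertices: np.ndarray, origin: np.ndarray, n_rays: int) -> np.ndarray:
    """Distance from `origin` to the convex polygon boundary along n_rays evenly spaced rays."""
    angles = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    distances = []
    for theta in angles:
        d = np.array([np.cos(theta), np.sin(theta)])
        t_hit = np.inf
        # Intersect the ray origin + t*d (t >= 0) with each polygon edge p + s*(q - p), s in [0, 1].
        for i in range(len(vertices)):
            p, q = vertices[i], vertices[(i + 1) % len(vertices)]
            e = q - p
            denom = d[0] * (-e[1]) - d[1] * (-e[0])
            if abs(denom) < 1e-12:
                continue  # ray parallel to this edge
            rhs = p - origin
            t = (rhs[0] * (-e[1]) - rhs[1] * (-e[0])) / denom
            s = (d[0] * rhs[1] - d[1] * rhs[0]) / denom
            if t >= 0 and 0 <= s <= 1:
                t_hit = min(t_hit, t)
        distances.append(t_hit)
    return np.array(distances)

square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
triangle = np.array([[1.5, 0], [-0.75, 1.3], [-0.75, -1.3]], dtype=float)
center = np.array([0.0, 0.0])
print(ray_boundary_distances(square, center, 8))    # fingerprint of the square
print(ray_boundary_distances(triangle, center, 8))  # differs from the square's fingerprint
```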
Accurate determination of a small molecule candidate (ligand) binding pose in its target protein pocket is important for computer-aided drug discovery. Typical rigid-body docking methods ignore the pocket flexibility of protein, while the more accurate pose generation using molecular dynamics is hindered by slow protein dynamics. We develop a tiered tensor transform (3T) algorithm to rapidly generate diverse protein-ligand complex conformations for both pose and affinity estimation in drug screening, requiring neither machine learning training nor lengthy dynamics computation, while maintaining both coarse-grain-like coordinated protein dynamics and atomistic-level details of the complex pocket. The 3T conformation structures we generate are closer to experimental co-crystal structures than those generated by docking software, and more importantly achieve significantly higher accuracy in active ligand classification than traditional ensemble docking using hundreds of experimental protein conformations. 3T structure transformation is decoupled from the system physics, making future usage in other computational scientific domains possible.
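The "tiered" transformation idea can be pictured as applying small rigid-body perturbations at nested levels of structure (a whole region first, then finer sub-groups of atoms); the sketch below is only a schematic of such a hierarchy acting on raw coordinates, with invented tier definitions and step sizes, and is not the 3T algorithm itself.

```python
import numpy as np

def random_rigid_transform(coords: np.ndarray, max_angle_deg: float, max_shift: float, rng) -> np.ndarray:
    """Apply a small random rotation (about the subset centroid) plus a random translation."""
    angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    # Rodrigues' rotation formula for a rotation about `axis` by `angle`.
    K = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    center = coords.mean(axis=0)
    shift = rng.uniform(-max_shift, max_shift, size=3)
    return (coords - center) @ R.T + center + shift

def tiered_perturbation(coords: np.ndarray, tiers, rng=None) -> np.ndarray:
    """Apply rigid perturbations tier by tier: coarser moves on large groups, finer moves on subgroups.
    `tiers` is a list of (list_of_atom_index_arrays, max_angle_deg, max_shift)."""
    rng = rng or np.random.default_rng(0)
    out = coords.copy()
    for groups, max_angle, max_shift in tiers:
        for idx in groups:
            out[idx] = random_rigid_transform(out[idx], max_angle, max_shift, rng)
    return out

# Toy usage: 30 "atoms", one coarse tier (all atoms) then a finer tier (three 10-atom groups).
atoms = np.random.default_rng(1).random((30, 3)) * 10
tiers = [
    ([np.arange(30)], 5.0, 0.5),                                            # tier 1: whole region
    ([np.arange(0, 10), np.arange(10, 20), np.arange(20, 30)], 2.0, 0.2),   # tier 2: sub-groups
]
conformation = tiered_perturbation(atoms, tiers)
```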
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
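A key ingredient mentioned above is parameterizing a Brenier map as the gradient of a convex potential represented by an input convex neural network; a minimal network along these lines is sketched below, where the layer sizes, activations, and weight constraints are illustrative and do not reproduce the paper's exact architecture or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputConvexPotential(nn.Module):
    """Scalar potential f(x) convex in x: nonnegative weights on the hidden-state path
    combined with convex, nondecreasing activations (standard ICNN construction)."""
    def __init__(self, dim: int, hidden: int = 64, depth: int = 3):
        super().__init__()
        self.x_layers = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(depth)])
        self.z_weights = nn.ParameterList(
            [nn.Parameter(torch.randn(hidden, hidden) * 0.01) for _ in range(depth - 1)]
        )
        self.out_x = nn.Linear(dim, 1)
        self.out_z = nn.Parameter(torch.randn(1, hidden) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.softplus(self.x_layers[0](x))
        for x_layer, w_z in zip(self.x_layers[1:], self.z_weights):
            z = F.softplus(x_layer(x) + z @ F.softplus(w_z).T)  # softplus keeps z-path weights >= 0
        return (self.out_x(x) + z @ F.softplus(self.out_z).T).squeeze(-1)

def brenier_map(potential: InputConvexPotential, x: torch.Tensor) -> torch.Tensor:
    """The monotone (hence bijective almost everywhere) map is the gradient of the convex potential."""
    x = x.requires_grad_(True)
    f = potential(x).sum()
    return torch.autograd.grad(f, x, create_graph=True)[0]

# Toy usage: map 2-D latent samples through the gradient of the convex potential.
phi = InputConvexPotential(dim=2)
z = torch.randn(8, 2)
print(brenier_map(phi, z).shape)  # torch.Size([8, 2])
```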
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method. Since the introduction of DARTS, there has been little work on adapting its action space to state-of-the-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a similarly sized DARTS network at layer counts as small as 2. Furthermore, with fewer layers, it not only achieves higher accuracy with lower GMACs and parameter count, but GradCAM comparisons also show that our network detects distinctive features of target objects better than DARTS.
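For context, a ConvNeXt-style inverted bottleneck expands the channel dimension inside the block around a depthwise convolution; the sketch below shows that standard pattern together with a reduced-expansion variant in the spirit of a "pseudo-inverted" block. The exact layer ordering, normalization, and expansion ratio used in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class InvertedBottleneck(nn.Module):
    """ConvNeXt-style block: depthwise conv, then 1x1 expansion (4x) and 1x1 projection."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.BatchNorm2d(dim)  # ConvNeXt uses LayerNorm over channels; BN kept simple here
        self.pw_expand = nn.Conv2d(dim, expansion * dim, kernel_size=1)
        self.act = nn.GELU()
        self.pw_project = nn.Conv2d(expansion * dim, dim, kernel_size=1)

    def forward(self, x):
        residual = x
        x = self.pw_project(self.act(self.pw_expand(self.norm(self.dwconv(x)))))
        return x + residual

class ReducedBottleneck(InvertedBottleneck):
    """Reduced-expansion variant: hidden width equal to dim, cutting MACs and parameters."""
    def __init__(self, dim: int):
        super().__init__(dim, expansion=1)

x = torch.randn(1, 32, 16, 16)
print(InvertedBottleneck(32)(x).shape, ReducedBottleneck(32)(x).shape)
```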